Many mobile manufacturers have recently adopted dual-pixel (DP) sensors in their flagship models for faster auto-focus and aesthetic image capture. Despite these advantages, research on using DP sensors for 3D facial understanding has been limited by the lack of datasets and of algorithm designs that exploit the parallax in DP images. This is because the baseline between the sub-aperture images is extremely narrow and parallax exists only in defocus-blurred regions. In this paper, we introduce a DP-oriented depth/normal network that reconstructs 3D facial geometry. For this purpose, we collect DP facial data comprising more than 135K images of 101 persons, captured with our multi-camera structured-light system. The data contain the corresponding ground-truth 3D models, including metric-scale depth maps and surface normals. Our dataset allows the proposed matching network to generalize for 3D facial depth/normal estimation. The proposed network consists of two novel modules, an adaptive sampling module and an adaptive normal module, which are specialized for handling the defocus blur in DP images. Finally, the proposed method achieves state-of-the-art performance over recent DP-based depth/normal estimation methods. We also demonstrate the applicability of the estimated depth/normals to face spoofing detection and relighting.
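The geometric link between DP parallax and depth is left implicit above. As background, prior dual-pixel work models DP disparity as affine in inverse depth, which is what makes metric-scale depth recoverable from such a narrow baseline. A minimal sketch of that relation, with hypothetical calibration constants `a` and `b` (this is background geometry, not the paper's network):

```python
import numpy as np

def dp_disparity_to_depth(disparity, a, b):
    """Convert dual-pixel disparity to metric depth.

    Assumes the standard affine model d = a / z + b, where d is the
    DP disparity (pixels), z is metric depth, and (a, b) are per-camera
    calibration constants (hypothetical values here).
    """
    inv_depth = (disparity - b) / a
    return 1.0 / np.clip(inv_depth, 1e-6, None)

# Example: a narrow-baseline DP sensor yields only sub-pixel disparities.
disp = np.array([0.15, 0.30, 0.60])                 # pixels
depth = dp_disparity_to_depth(disp, a=0.2, b=0.0)
print(depth)                                        # ~[1.33, 0.67, 0.33] metres
```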
Learning to estimate object pose often requires ground-truth (GT) labels, such as CAD models and absolute-scale object poses, which are expensive and laborious to obtain in the real world. To tackle this problem, we propose an unsupervised domain adaptation (UDA) method for category-level object pose estimation, called \textbf{UDA-COPE}. Inspired by recent multi-modal UDA techniques, the proposed method exploits a teacher-student self-supervised learning scheme to train a pose estimation network without using target-domain labels. We also introduce a bidirectional filtering method between the predicted normalized object coordinate space (NOCS) map and the observed point cloud, which not only makes our teacher network more robust to the target domain but also provides more reliable pseudo labels for training the student network. Extensive experimental results demonstrate the effectiveness of our proposed method both quantitatively and qualitatively. Notably, without leveraging target-domain GT labels, our method achieves performance comparable or sometimes superior to existing methods that depend on GT labels.
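The bidirectional filtering can be pictured as mutual nearest-neighbour outlier rejection between the back-projected NOCS prediction and the observed depth point cloud, assuming the NOCS prediction has already been mapped into the camera frame with the current pose/scale estimate. A minimal sketch of one plausible criterion (the threshold `tau` and the exact rule are assumptions, not the paper's definition):

```python
import numpy as np
from scipy.spatial import cKDTree

def bidirectional_filter(nocs_pts, obs_pts, tau=0.02):
    """Keep only points whose nearest neighbour in the *other* set lies
    within tau, checked in both directions (hypothetical criterion)."""
    d_no, _ = cKDTree(obs_pts).query(nocs_pts)    # NOCS -> observed
    d_on, _ = cKDTree(nocs_pts).query(obs_pts)    # observed -> NOCS
    return nocs_pts[d_no < tau], obs_pts[d_on < tau]

nocs = np.random.rand(500, 3)                     # aligned NOCS prediction
obs = nocs + 0.005 * np.random.randn(500, 3)      # noisy observed cloud
obs = np.vstack([obs, np.random.rand(50, 3) + 2]) # gross outliers
nocs_f, obs_f = bidirectional_filter(nocs, obs)
print(len(nocs_f), len(obs_f))                    # outliers rejected
```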
Point clouds obtained from 3D scans are often sparse, noisy, and irregular. To cope with these issues, recent studies have separately addressed densifying, denoising, and completing inaccurate point clouds. In this paper, we advocate that jointly solving these tasks leads to significant improvements in point cloud reconstruction. To this end, we propose a deep point cloud reconstruction network consisting of two stages: 1) a 3D sparse stacked-hourglass network for initial densification and denoising, and 2) a refinement via transformers that converts discrete voxels into 3D points. In particular, we further improve the performance of the transformer with a newly proposed module called amplified positional encoding. This module is designed to differently amplify the magnitude of positional encoding vectors based on point distance, for adaptive refinement. Extensive experiments demonstrate that our network achieves state-of-the-art performance among recent studies on the ScanNet, ICL-NUIM, and ShapeNetPart datasets. Moreover, we highlight our network's ability to generalize to real-world and unseen scenes.
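The amplified positional encoding is only described at a high level above. A minimal sketch of the stated idea, scaling a standard sinusoidal encoding by a distance-dependent factor (the linear amplification form and the reference point are assumptions, not the paper's exact design):

```python
import torch

def sinusoidal_encoding(x, num_freqs=8):
    """Standard sinusoidal encoding of 3D coordinates x: (N, 3)."""
    freqs = 2.0 ** torch.arange(num_freqs)              # (F,)
    angles = x[..., None] * freqs                       # (N, 3, F)
    return torch.cat([angles.sin(), angles.cos()], -1).flatten(1)  # (N, 6F)

def amplified_encoding(x, ref, alpha=1.0):
    """Amplify encoding magnitude with the distance of each point to a
    reference point `ref` (the amplification function is hypothetical)."""
    dist = (x - ref).norm(dim=-1, keepdim=True)         # (N, 1)
    return (1.0 + alpha * dist) * sinusoidal_encoding(x)

pts = torch.randn(1024, 3)
enc = amplified_encoding(pts, ref=torch.zeros(3))
print(enc.shape)                                        # torch.Size([1024, 48])
```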
MLP-Mixer has newly appeared as a challenger to the realms of CNNs and transformers. Despite its simplicity compared to transformers, the concept of channel-mixing MLPs and token-mixing MLPs achieves noticeable performance in visual recognition tasks. Unlike images, however, point clouds are inherently sparse, unordered, and irregular, which limits the direct use of MLP-Mixer for point cloud understanding. In this paper, we propose PointMixer, a universal point set operator that facilitates information sharing among unstructured 3D points. By simply replacing the token-mixing MLP with a softmax function, PointMixer can "mix" features within and between point sets. By doing so, PointMixer can be broadly used in the network as inter-set mixing, intra-set mixing, and pyramid mixing. Extensive experiments show the competitive or superior performance of PointMixer against transformer-based methods in semantic segmentation, classification, and point reconstruction.
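The core operation, replacing the fixed token-mixing MLP with a softmax over set elements, can be sketched as follows. This is a simplified intra-set variant with an assumed per-point score MLP, not the official PointMixer code:

```python
import torch
import torch.nn as nn

class SoftmaxSetMixer(nn.Module):
    """Minimal intra-set mixing: softmax scores over the K points of a
    set take the place of a fixed token-mixing MLP (illustrative sketch)."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)    # per-point mixing score
        self.value = nn.Linear(dim, dim)  # channel-mixing MLP

    def forward(self, feats):             # feats: (B, K, C) point sets
        w = torch.softmax(self.score(feats), dim=1)   # (B, K, 1)
        return (w * self.value(feats)).sum(dim=1)     # (B, C) set summary

mixer = SoftmaxSetMixer(64)
out = mixer(torch.randn(2, 16, 64))       # 2 sets of 16 points
print(out.shape)                          # torch.Size([2, 64])
```

Because the softmax weights are computed from the features themselves, the operator is indifferent to point ordering and set size, which is what makes it applicable to unstructured point sets.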
For low-level computer vision and image processing ML tasks, training on large datasets is critical for generalization. However, the standard practice of relying on real-world images primarily from the Internet comes with image quality, scalability, and privacy issues, especially in commercial contexts. To address this, we have developed a procedural synthetic data generation pipeline and dataset tailored to low-level vision tasks. Our Unreal engine-based synthetic data pipeline populates large scenes algorithmically with a combination of random 3D objects, materials, and geometric transformations. Then, we calibrate the camera noise profiles to synthesize the noisy images. From this pipeline, we generated a fully synthetic image denoising dataset (FSID) which consists of 175,000 noisy/clean image pairs. We then trained and validated a CNN-based denoising model, and demonstrated that the model trained on this synthetic data alone can achieve competitive denoising results when evaluated on real-world noisy images captured with smartphone cameras.
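As an illustration of noise-profile-based synthesis, the following applies the common shot/read (heteroscedastic Gaussian) noise model to a clean linear-intensity image; the gain values are hypothetical, not the calibrated smartphone profiles used for FSID:

```python
import numpy as np

def add_camera_noise(clean, shot_gain=0.012, read_std=0.004, rng=None):
    """Signal-dependent noise model: variance = shot_gain * I + read_std^2.
    `clean` is a linear-intensity image in [0, 1]; gains are placeholders."""
    rng = rng or np.random.default_rng()
    std = np.sqrt(shot_gain * clean + read_std ** 2)
    return np.clip(clean + rng.normal(0.0, 1.0, clean.shape) * std, 0.0, 1.0)

clean = np.random.rand(64, 64, 3).astype(np.float32)  # stand-in rendered patch
noisy = add_camera_noise(clean)                       # noisy/clean training pair
```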
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
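Since the models are openly released on the Hugging Face Hub, inference takes a few lines with the transformers library. A quick usage sketch with the small bigscience/bloom-560m variant to keep the download manageable (the full 176B checkpoint is published as bigscience/bloom):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bigscience/bloom-560m"            # small openly released variant
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tok("A language model is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
```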
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs having many computational and memory constraints. In this Mobile AI challenge, we address this problem and challenge the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to do high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating frame rates of up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
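For context on what deploying an "INT8 model" to such an NPU involves, here is a minimal sketch of full-integer post-training quantization with the TensorFlow Lite converter. The model path, patch size, and calibration data are placeholders, and actual challenge entries may instead have used quantization-aware training:

```python
import tensorflow as tf

def representative_data():
    # Placeholder calibration batches; real entries would feed DIV2K LR patches.
    for _ in range(100):
        yield [tf.random.uniform((1, 360, 640, 3))]

converter = tf.lite.TFLiteConverter.from_saved_model("sr_model/")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("sr_int8.tflite", "wb") as f:
    f.write(converter.convert())
```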
The crystallization of modeling methods around the Transformer architecture has been a boon for practitioners. Simple, well-motivated architectural variations can transfer across tasks and scale, increasing the impact of modeling research. However, with the emergence of state-of-the-art 100B+ parameter models, large language models are increasingly expensive to accurately design and train. Notably, it can be difficult to evaluate how modeling decisions may impact emergent capabilities, given that these capabilities arise mainly from sheer scale. In the process of building BLOOM--the BigScience Large Open-science Open-access Multilingual language model--our goal is to identify an architecture and training setup that makes the best use of our 1,000,000 A100-GPU-hours budget. Specifically, we perform an ablation study at the billion-parameter scale comparing different modeling practices and their impact on zero-shot generalization. In addition, we study the impact of various popular pre-training corpora on zero-shot generalization. We also study the performance of a multilingual model and how it compares to the English-only one. Finally, we consider the scaling behaviour of Transformers to choose the target model size, shape, and training setup. All our models and code are open-sourced at https://huggingface.co/bigscience .
In times of public crisis, seeking information is crucial for people's self-care and well-being. Extensive research has investigated empirical understandings as well as technical solutions that facilitate information seeking by domestic citizens of affected regions. However, limited knowledge has been established to support international migrants who need to cope with a crisis in their host countries. This paper presents an interview study with two groups of Chinese migrants living in Japan and the United States (N = 14). Participants reflected on their information-seeking experiences during the COVID-19 pandemic. Their reflections were complemented by two weeks of self-tracking in which participants kept records of their COVID-related information-seeking practices. Our data indicate that participants often took language detours, that is, they visited Mandarin resources for information about the outbreak in their host countries. They also made strategic use of Mandarin information for selective reading, cross-checking, and contextualized interpretation of COVID-related information in Japanese or English. Although such practices enhanced the effectiveness of participants' COVID-related information gathering and sensemaking, they sometimes put participants at a disadvantage in ways they did not always recognize. Moreover, participants lacked awareness of, or a preference for, migrant-oriented information about the pandemic released by public authorities in their host countries, even when such information was available. Building on these findings, we discuss solutions to improve international migrants' seeking of crisis-related information in non-native languages and cultural environments. We advocate inclusive crisis infrastructures that engage people with varying levels of local language fluency, information literacy, and experience with public services.
The non-local (NL) block is a popular module that demonstrates the capability to model global context. However, NL blocks generally have heavy computation and memory costs, so applying the block to high-resolution feature maps is impractical. In this paper, to investigate the efficacy of the NL block, we empirically analyze whether the magnitude and direction of input feature vectors properly affect the attention between vectors. The results show the inefficacy of the softmax operation, which is generally used to normalize the attention map of the NL block. Attention maps normalized with the softmax operation rely heavily on the magnitude of the key vectors, and performance degrades if the magnitude information is removed. By replacing the softmax operation with a scaling factor, we demonstrate improved performance on CIFAR-10, CIFAR-100, and Tiny-ImageNet. Moreover, our method shows robustness to embedding channel reduction and embedding weight initialization. Notably, our method makes multi-head attention employable without additional computational cost.
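The proposed change can be sketched on a standard non-local block: the attention map is divided by the number of spatial positions instead of being passed through softmax. The exact placement of the scaling factor is an assumption, and this is an illustrative sketch rather than the official code:

```python
import torch
import torch.nn as nn

class ScaledNonLocal(nn.Module):
    """Non-local block with softmax normalization replaced by 1/N scaling."""
    def __init__(self, channels, embed=None):
        super().__init__()
        embed = embed or channels // 2
        self.theta = nn.Conv2d(channels, embed, 1)  # query embedding
        self.phi = nn.Conv2d(channels, embed, 1)    # key embedding
        self.g = nn.Conv2d(channels, embed, 1)      # value embedding
        self.out = nn.Conv2d(embed, channels, 1)

    def forward(self, x):                           # x: (B, C, H, W)
        b, _, h, w = x.shape
        q = self.theta(x).flatten(2)                # (B, E, N), N = H*W
        k = self.phi(x).flatten(2)
        v = self.g(x).flatten(2)
        attn = torch.bmm(q.transpose(1, 2), k) / (h * w)   # scaling, no softmax
        y = torch.bmm(v, attn.transpose(1, 2)).view(b, -1, h, w)
        return x + self.out(y)                      # residual connection

block = ScaledNonLocal(64)
print(block(torch.randn(2, 64, 16, 16)).shape)      # torch.Size([2, 64, 16, 16])
```

Because the scaling is linear, the attention no longer saturates with key magnitude the way softmax does, which is the behaviour the analysis above targets.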